Do animals have a theory of mind?

 

Greg Detre

Friday, June 01, 2001

Prof. Marian Dawkins

Animal Behaviour VI

 

What is ‘theory of mind’?

An animal has a ‘theory of mind’ when it can form a representation of the beliefs, desires and capabilities of other animals, and so predict their behaviour and the probable consequences of their actions in an internal model. Or, as Heyes (1998) puts it:

An animal with a theory of mind believes that mental states play a causal role in generating behavior and infers the presence of mental states in others by observing their appearance and behavior under various circumstances.

She lists the variety of related terms which ethologists use in place of, or as aspects of, ‘theory of mind’:

"Machiavellian intelligence" (Byrne & Whiten 1988; Whiten & Byrne 1988), "metarepresentation" (Whiten & Byrne 1991), "metacognition" (Povinelli 1993), "mind reading" (Krebs & Dawkins 1984; Whiten 1991), "mental state attribution" (Cheney & Seyfarth 1990a; 1990b; 1992) and "pan- or pongo-morphism" (Povinelli 1995).

The question of theory of mind also arises in human developmental psychology, both in establishing how early typically developing infants’ cognitive abilities emerge, and in comparison with the impaired development of, for example, autistic children. Unfortunately, ethologists are still wrangling to some extent over whether any animals have a theory of mind at all.

 

Methodological issues

I will first discuss the methodological issues involved in empirically determining whether animals have theory of mind, then attempt a survey and critique of the evidence for and against.

The hardest part of discerning whether animals have a theory of mind is designing the experiments. Animal behaviour is often complex and varied, and can appear to be goal-directed and highly attuned to how other animals will react. Yet such behaviour might well be innate, or learned through trial and error.

 

Intentionality

Byrne adopts Dennett’s levels of ‘intentionality’ as the best means of classifying ‘theory of mindedness’. Here, he means intentionality in the philosophical sense of ‘aboutness’ (see the Stanford Encyclopedia of Philosophy entry), a notion ethologists have sometimes confused with consciousness.

Innate behaviour that has evolved to exploit other animals, without the agent’s having formed any conception of why the action is beneficial, is known as zero-order intentionality. The eyed hawk-moth is a good example. The markings on its wings, when spread, look like a pair of hawk eyes staring at the onlooking animal. The hawk-moth ‘knows’ that when it spreads its wings at the small mammals on which hawks prey, they will probably mistake it for a hawk and flee. However, there appears to be no intentionality exhibited: the hawk-moth is not representing the mind and behaviour of the small mammals in its brain, and predicting their behaviour on the basis of this neural representation. This can be seen in that it uses the same trick when faced with cardboard props and actual hawks, despite the inappropriateness of the action in both cases.

First-order intentionality allows that the animal is able to predict the consequences of its action, i.e. it has formed a representation of its environment, which may even incorporate other animals as causal agents, but it does not attribute beliefs and desires to them. First-order intentionality admits learning, and the formation of a complex representation of the animal’s environment, but the associations and inferences do not go so far as allowing the animal to place itself in other situations, or to speculate on mental processes in other animals. Rather, other animals are just another part of the sensorium prompting behaviour.

It is worth noting that when ethologists talk of ‘deceit’ in animal communication, they may mean either of these ‘unintentional’ levels; the quotation marks highlight the lack of second-order intentionality behind the action, differentiating this technical use of the term from human deceit.

In order for an animal to be considered to possess a theory of mind, it must demonstrate second-order intentionality. Second-order intentionality is:

reserved for cases when the behaviour shows that the agent has an intention that in English would be expressed ‘I want him to think X’ (i.e. mentally representing another’s mental state).

This is a problematic definition, since animals do not have language, and we do not want to have to impute a language of thought to them in order for them to have a theory of mind. Rather, the definition is intended to pick out second-order intentionality as covering those cases where only such an explanation could account for the animal’s behaviour. As we shall see, demonstrating this has proved extremely complicated, although there are certain hallmarks or criteria that could only be explained by genuine second-order intentionality.

 

Confusions and misattributions of theory of mindedness

First, though, I will consider how we can be misled into attributing a theory of mind to animals which are simply very complex, highly evolved or fast learners.

Genetic hard-wiring can potentially account for almost any stereotyped behaviour, even when it involves a long sequence of carefully choreographed actions. The example of the plover illustrates this well. If you were a bird sitting on your eggs and saw a cat wandering nearby, what would you do? You could hope the cat doesn’t see you (and many birds camouflage their nests carefully), or you could attack the cat, but you can’t fly away and leave your eggs, and you can’t move your nest or eggs. The plover, which nests on the ground, has developed a technique of flying out into the open, flapping about, diving downwards and feigning a wing injury to distract the predator’s attention, before flying off in the hope that the cat has forgotten about the nest. A human being in a similar position, and in a plover’s body, might improvise a similar solution. But the facts that the behaviour varies little from plover to plover, that it is not just one example from an array of intentional behaviours, and that plovers do not modify it according to the cat’s responses all suggest that this intentional-seeming behaviour is entirely genetically hard-wired and non-intentional.

Learnt, non-intentional but intentional-seeming behaviour can be divided into two types: a) associative learning, or b) a product of inferences about observable features of the situation rather than mental states (Heyes, 1993). To illustrate: ‘One of the female baboons at Gilgil grew particularly fond of meat, although the males do most of the hunting. A male, one who does not willingly share, caught an antelope. The female edged up to him and groomed him until he lolled back under her attentions. She then snatched the antelope carcass and ran’ (observation by Strum, cited as personal communication in Jolly 1985).

The deception could have been quite accidental, i.e. the female happened to choose that time to groom the male, and then simply made a grab for the carcass when an unexpected opportunity arose to do so while he was lolling back. If the behaviour was not entirely by chance, it could still have been non-intentional. Again, assume that she chose to groom him at that moment without any further plan to grab the carcass. If this female had learnt painstakingly by trial and error after many abortive attempts that stealing from supine individuals is most likely to be successful, she might simply have formed an association between snatching food and the sight of a supine conspecific. Even if the behaviour was based on inference rather than associative learning, there need not have been any notion of mental state attached to being supine. That is, she might have inferred that her chances of successfully snatching the meat were higher based on reasoning about observable features of the situation, i.e. non-mental categories, without having formed any representation regarding posture as an indicator of mental state.

Thus, particularly intelligent animals can learn through trial and error, from only a very small number of instances, that a certain kind of behaviour elicits a reward, however inexplicable the connection may be to them. Trial and error learning is usually facilitated when the animal is operating under rare or isolated conditions, or where the particular circumstances repeat themselves in close succession, and where the reward (or punishment) is intense. In such cases, an animal (the ‘agent’) may learn quickly that a particular behaviour elicits behaviour from another animal (the ‘target’) that is beneficial, and the agent will repeat that behaviour. This qualifies as ‘deception’ in the technical ethological sense of animal communication, but not in the intentional sense that requires a theory of mind.

It is also important to remember that an animal could have behaved in an intentional-seeming way by chance. Marc Hauser relates his observation of Tristan, whose over-amorous advances towards Borgia cause her to scream. Her relatives chase Tristan, who suddenly stops and gives an alarm call. Borgia’s pursuing relatives scatter into the trees, but Tristan stays where he is, eventually walking away, the chase having been abandoned. Hauser speculates as to the extent to which Tristan had intended these beneficial consequences. He could see no signs of nearby danger, although that is not to say that Tristan might not simply have been mistaken but well-meaning with his call. Since Tristan never exhibited similar behaviour again over the course of the study, Hauser’s observation can be categorised with the ‘anecdotal’ evidence that Whiten and Byrne accumulated in their survey of primate ethologists’ observations. Such evidence is anecdotal in the sense that each observation is inconclusive and isolated, although together, Whiten and Byrne’s 253 accumulated reports along similar lines are persuasive.

Lastly, there is the possibility, as Byrne himself considers, of scientific mis-reporting. However, the many cases of such intentional-seeming deception cannot all be mis-reported, though they can easily be mis-interpreted.

 

Possible ways of assessing theory of mindedness

There are various related ways in which we can assess whether an animal does have a theory of mind, or is simply employing innate behaviour or learned trial and error set-pieces. These include:

Appropriateness - How well an animal is able to adapt its behaviour to the situation, and whether it persists with behaviour that previously seemed intentional to experimenters, even in completely inappropriate situations.

Novel situations - When faced with a wholly novel situation, an animal with a theory of mind should be able to improvise behaviour that produces beneficial consequences, perhaps requiring intermediary stages or the involvement of other ‘tool’ animals to achieve a re-defined goal.

Responding to responses - It is one thing for an animal to play out an evolved behaviour that usually works, but it is much harder to adjust that behaviour in response to the target’s reactions. Doing so would require a whole array of set-piece behaviours, a huge number of trial and error situations, or a genuine theory of mind.

Species-wide - By comparing the relative flexibility of behaviour across conspecifics, ethologists should be able to build up a picture of how complex the animal’s competence is. If every animal in a species employs the same range of set-pieces in response to similar situations, especially if their learning environments differ, that would indicate an innate basis. Of course, it can be difficult to discern whether this is simply because that response is the very best one available. In this case, experimenters need to vary the situation in some way, minimise the possibility that the animal might have learned the behaviour over the course of its life, and compare how similar the behaviour of wild and captive animals is, as an indication of innateness.

Sense of self - It seems plausible that in order to be able to treat other animals as intentional agents, possessing beliefs and desires, an agent needs to be able to see itself as just such an intentional agent. This argument may actually be backwards, though: it could be that my being able to see myself as an intentional agent follows from my mental representations of other animals as intentional agents. I will consider below how much the mirror tests for self-recognition, first devised by Gallup, can tell us about theory of mindedness.

Language - In certain cases, it may actually be possible to ask the animal what its belief state is. Kanzi the bonobo and Alex the grey parrot both demonstrate considerable linguistic ability, though probably not quite enough to give conclusive evidence one way or the other about their belief states.

 

Survey of the evidence

I will now consider self-recognition, perspective-taking and false belief experiments as evidence for theory of mind. In weighing this evidence, it is important to consider both the competence of the animal at the given task and the validity of the experiment as a whole in demonstrating theory of mind. The second question is the harder to assess, and of far greater importance. For validity to be affirmed, it is not enough that there be no equally plausible non-mentalistic alternative; rather, no non-mentalistic alternative at all should be able to explain the behaviour.

 

Gallup, who first came up with the idea of the mirror test (see below) in 1969, maintains that:

knowledge of mental states in others presupposes knowledge of mental states in oneself and, therefore, that knowledge of self paves the way for an inferential knowledge of others.

Since Gallup’s early experiments with chimps, he claims, the ability of animals to recognise their own images in mirrors has been replicated over 20 times, with a variety of species, though mainly primates.

Moreover, recent research (Reiss & Marino, 2001, Proceedings of the National Academy of Sciences) claims that two bottlenose dolphins, Presley and Tab, could recognise themselves in the mirror. They reacted to their own reflections without the social responses they displayed when seeing other dolphins, and when black marks were placed at different places on their bodies, the dolphins swam to the mirror walls and exposed the mark to the reflective surface in order to look at it. They spent more time in front of the mirror after being marked than when unmarked, and their first behaviour on arriving at the mirror was to expose the black mark for inspection. It may be worth examining how well the experimenters were able to quantify this, since they could not simply count the number of times the dolphins touched the marks, as we can with primates. Reiss claims:

This is the first conclusive study that shows that there seems to be a convergence in these abilities between the large-brained primates, including humans, and a very different animal (a dolphin) that's come from a very different background and environmental history, and has a very different body type and brain organization.

Unfortunately, even a fairly concrete demonstration of self-recognition would not amount to much of a case for a theory of mind. As Povinelli suggests, the animals may simply have extended their motor self-concept, rather than developed a psychological one; i.e. they do not really recognise themselves but simply learn an equivalence between their behaviour and what they see in the mirror.

 

Dorothy L. Cheney and Robert M. Seyfarth of the University of Pennsylvania have found that vervet monkeys give alarm calls on seeing a predator even if other monkeys have already seen it, too. Likewise, they found that Japanese monkey mothers do not distinguish between offspring that know or do not know about food or danger when it comes to alerting their babies to the presence of one or the other.

The evidence that chimpanzees are unable to imagine someone else’s perspective is quite strong (Povinelli, 1998). This is important because chimps can pass the mirror test, suggesting that the mirror test may not be as good an indication of theory of mind as Gallup believes. Chimpanzees do pay close attention to gaze direction. Hauser describes locking gazes with a young female chimpanzee, then suddenly looking at something behind them, which causes the chimp to immediately swivel and look in that direction. Also, Frans B. M. de Waal of the Yerkes Regional Primate Research Center at Emory University reported that chimpanzees do not appear to trust the reassurance gestures of their former opponents unless such gestures are accompanied by a mutual gaze, that is, unless they stare directly into one another's eyes.

Povinelli describes an experiment where the chimps first get used to begging for food from an experimenter sitting just out of reach. When presented with two experimenters, one holding a banana and the other holding a block of wood, the chimps predictably begged for the banana. Then, the chimps were presented with pairs of experimenters in various positions, modelled on behaviours observed in the chimps’ spontaneous play, including blindfolded over the eyes, blindfolded over the mouth, wearing a bucket over the head, with hands over the eyes, or sitting with the back turned to the chimpanzee, all designed to present a ‘seeing/not-seeing’ contrast. In all but the last test, the chimps’ success at choosing the experimenter who could see them was no better than chance. When Povinelli et al. set out to decide why the chimps were more successful when the experimenter’s back was turned, they realised that it was not that the chimps could tell the experimenter could not see them, but that in training they had been used to begging from front-on experimenters. In every test they devised, the chimps failed to discriminate between experimenters that could and could not see their begging behaviour, although they learned quickly by trial and error when their behaviour would be most successful. Povinelli contrasts these findings with those of Flavell et al., who found that children as young as two or three years seem to understand the concept of seeing in this way.

Of course, even if the chimpanzees did discriminate on the basis of which direction the person was facing, whether their eyes were covered, gaze direction and the like, that would not be sufficient to attribute a theory of mind; nor would it matter whether the discrimination was learned or innate. The question that would need to be answered would again be one of validity. Experimenters need to devise a means of deciding whether the chimps have merely formed a first-order representation of which situations are most likely to yield reward, based on such observable cues. To ascribe a theory of mind to the chimps, they would have to be taking the experimenter’s visual perspective and, moreover, assessing what the experimenter knows, i.e. forming a set of beliefs about the experimenter’s beliefs, based on visual perspective.

 

The ‘false belief’ test was suggested by Daniel Dennett in his response to Premack & Woodruff’s seminal paper, ‘Does the chimpanzee have a theory of mind?’. Premack & Woodruff showed Sarah, a female chimp, videos of a human actor trying to solve problems such as reaching inaccessible objects. Sarah had to choose one of two pictures, one of which depicted a solution to the problem, such as the actor reaching out of the cage to the bananas with a stick. Sarah reliably chose the picture of the solution, which Premack & Woodruff held to be evidence that she was imputing mental states, such as wanting to get the bananas, to the human actor.

Dennett suggested what he thought would prove a more targeted test for theory of mind, inspired by children’s awareness of Punch’s ‘false belief’ that Judy is in the box he throws off a cliff. Wimmer & Perner (1983) devised the Sally-Anne false belief test for young children. Children are presented with two dolls, Sally with a basket and Anne with a box. Sally places a marble in her basket and leaves the room. Anne takes the marble from Sally’s basket and moves it to her own box. The child is asked where Sally will look for the marble when she returns. A correct answer (in combination with correct answers to the naming, reality and memory control questions) is taken to imply that the child knows where the marble is, and is also able to recognise that Sally now holds a ‘false belief’. Criticisms have been made of this experiment in its current form as applied to autistic children, who do not engage in pretend play (though Leslie and Frith (1988) replicated the study using people rather than dolls, and obtained similar results), and of the false belief methodology in general as a test of theory of mind (Bloom and German, 2000). Bloom and German claim that the false belief test places high cognitive and linguistic demands on infants and animals, far beyond the possession of a theory of mind. However, they argue that the false belief task should not be abolished, for ‘while failure … isn’t necessarily informative about a child or animal’s conceptual abilities, success is’.

 

Discussion

It is worth noting that some philosophers claim that (some or many) animals are not capable of holding beliefs at all, let alone a theory of mind. This view takes many forms, with differing strictures. Donald Davidson (1975), for instance, ‘claimed that only creatures with the concepts of truth and falsehood can properly be said to have beliefs, and since these are meta-linguistic concepts … only language-using animals such as human beings can have beliefs’ (from Dennett, 1995).

In contrast, Dennett and McCarthy employ a ‘maximally permissive understanding of the term, … simply presupposing that whatever the structure [of the information-structures in the animals' brains] is, it is sufficient to permit the sort of intelligent choice of behavior that is well-predicted from the intentional stance. So yes, animals have beliefs. Even amoebas - like thermostats - have beliefs’ (Dennett, 1995).

Most philosophers would probably allow that the most cognitively advanced non-human animals have some form of belief. Janet Halperin gives the example of her Siamese fighting fish as candidates for having beliefs: although they do seem richly amenable (in some regards) to intentional interpretation, she also has a neural-net-like model of their control systems that seems to lack any components with the features beliefs are often supposed to have. This seems an important point against attributing beliefs to the fish. After all, if a few highly non-intentional, highly simplified simulations of a small number of neurons can exhibit behaviour similar to the fighting fish’s, there is no need to attribute beliefs to them.

I want briefly to consider theory of mindedness, consciousness and belief-states together. If we assume that consciousness is not necessary for, or plays little role in, our having belief states and second-order intentionality, then it must be that these complex, higher-order mental states can be represented in entirely connectionist terms. Certainly, cognitive science research assumes that the representations formed of other animals can be seen at a neural level, even if not easily understood. And it does not seem counter-intuitive to suppose that there might be animals, or robots, with a theory of mind who are not conscious. A character in a computer game that responds differently on the basis of what information it has about the other characters, even if those responses are generated by an algorithm, could be said to have a theory of mind without being at all conscious. Of course, it seems likely that the two will go hand in hand, either because both coincide with complexity, or because one gives rise to the other.
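The game-character example can be made concrete with a minimal sketch (all names and the scenario are hypothetical, chosen purely for illustration): a ‘player’ agent that acts not on the true state of the world, but on its model of what a ‘guard’ character believes, including when that belief is false. This is the structure of second-order representation, implemented by a plainly unconscious algorithm.

```python
# A hypothetical sketch of 'theory of mind without consciousness':
# a game character that tracks what another character has seen, and acts on
# its model of the other's (possibly false) belief rather than on the truth.

class Guard:
    """An NPC that remembers the last location where it saw the player."""
    def __init__(self):
        self.believed_player_location = None  # the guard's belief, not the truth

    def observe(self, player_location):
        self.believed_player_location = player_location


class Player:
    """An agent holding a second-order representation: a belief about the
    guard's belief ('I think the guard thinks I am here')."""
    def __init__(self):
        self.model_of_guard_belief = None

    def move(self, new_location, seen_by_guard, guard):
        if seen_by_guard:
            guard.observe(new_location)
            self.model_of_guard_belief = new_location
        # If unseen, the guard's belief (and our model of it) goes stale.

    def expects_search_at(self, location):
        # Predict the guard's behaviour from the guard's *belief*, which may
        # diverge from the player's actual location: a false belief.
        return self.model_of_guard_belief == location


guard, player = Guard(), Player()
player.move("courtyard", seen_by_guard=True, guard=guard)   # guard watches
player.move("cellar", seen_by_guard=False, guard=guard)     # guard misses this
# The player is in the cellar, but correctly predicts the guard will
# search the courtyard, exploiting the guard's false belief.
print(player.expects_search_at("courtyard"))  # True
print(player.expects_search_at("cellar"))     # False
```

Nothing in this program is conscious, yet the player's choices are generated from a representation of another agent's mental state, which is exactly the functional criterion at issue.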

Given this, we should expect that a theory of mind can be characterised in entirely functionalist terms, which means in connectionist terms if we are talking at the neural level. The same goes for beliefs, which exhibit lower-order intentionality than theory of mind. Halperin’s neural network model of her fighting fish could well have a theory of mind, if it can be understood to be processing what corresponds to the mental states of other fighting fish (or their simulations). As long as we are confident that no simple model with only single-level representation could produce the same results, it seems fair to ascribe a theory of mind to the neural network.

Returning to animals, I think that Byrne’s (1995) contention, that the great apes do have a theory of mind while the case for monkeys seems especially weak, and little better for dolphins or other non-primates, will probably prove correct. However, experimenters need to focus on experiments whose results cannot be explained equally well by both mentalistic and non-mentalistic processes.